Deep Learning Takes on Synthetic Biology

#artificialintelligence

The collaboration between data scientists from the Wyss Institute's Predictive BioAnalytics Initiative and synthetic biologists in Wyss Core Faculty member Jim Collins' lab at MIT was created to apply the computational power of machine learning, neural networks, and other algorithmic architectures to complex problems in biology that have so far defied resolution. As a proving ground for their approach, the two teams focused on a specific class of engineered RNA molecules: toehold switches, which are folded into a hairpin-like shape in their "off" state. When a complementary RNA strand binds to a "trigger" sequence trailing from one end of the hairpin, the toehold switch unfolds into its "on" state and exposes sequences that were previously hidden within the hairpin, allowing ribosomes to bind to and translate a downstream gene into protein molecules. This precise control over the expression of genes in response to the presence of a given molecule makes toehold switches very powerful components for sensing substances in the environment, detecting disease, and other purposes.
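The on/off behavior described above can be sketched as a toy program: a switch "opens" only when the incoming RNA strand is the reverse complement of its trigger sequence. The sequences and function names below are illustrative assumptions, not taken from the study.

```python
# Toy model of a toehold switch's on/off logic (illustrative only):
# the switch opens only when the incoming RNA strand is complementary
# to its exposed "trigger" sequence.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(rna: str) -> str:
    """RNA reverse complement, e.g. 'AUG' -> 'CAU'."""
    return "".join(COMPLEMENT[b] for b in reversed(rna))

def switch_state(toehold: str, incoming: str) -> str:
    """Return 'on' if the incoming strand can bind the toehold, else 'off'."""
    return "on" if incoming == reverse_complement(toehold) else "off"

print(switch_state("AUGGC", "GCCAU"))  # complementary strand -> 'on'
print(switch_state("AUGGC", "AUGGC"))  # identical strand -> 'off'
```

A real switch's behavior depends on folding thermodynamics, not exact string matching, which is precisely why predicting effectiveness is hard enough to need machine learning.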


Going Beyond Human Brains: Deep Learning Takes On Synthetic Biology

#artificialintelligence

Work by Wyss Core Faculty member Peng Yin in collaboration with Collins and others has demonstrated that different toehold switches can be combined to compute the presence of multiple "triggers," similar to a computer's logic board. DNA and RNA have been compared to "instruction manuals" containing the information needed for living "machines" to operate. But while electronic machines like computers and robots are designed from the ground up to serve a specific purpose, biological organisms are governed by a much messier, more complex set of functions that lack the predictability of binary code. Inventing new solutions to biological problems requires teasing apart seemingly intractable variables -- a task that is daunting to even the most intrepid human brains. Two teams of scientists from the Wyss Institute at Harvard University and the Massachusetts Institute of Technology have devised pathways around this roadblock by going beyond human brains; they developed a set of machine learning algorithms that can analyze reams of RNA-based "toehold" sequences and predict which ones will be most effective at sensing and responding to a desired target sequence.
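The sequence-to-function prediction described above can be sketched in miniature: one-hot encode RNA sequences and fit a simple classifier that predicts whether a switch will be effective. This is a hypothetical toy with synthetic labels, not the teams' actual model, which trained far larger networks on libraries of measured switches.

```python
# Minimal sketch of sequence-to-function prediction: one-hot encode RNA
# sequences, then fit a logistic-regression classifier. Data and labels
# are synthetic and illustrative only.
import numpy as np

rng = np.random.default_rng(0)
BASES = "AUGC"

def one_hot(seq):
    """Flatten a len(seq) x 4 one-hot encoding into a single vector."""
    m = np.zeros((len(seq), 4))
    for i, base in enumerate(seq):
        m[i, BASES.index(base)] = 1.0
    return m.ravel()

# Synthetic library: a switch is labeled "effective" iff its first base is G.
seqs = ["".join(rng.choice(list(BASES), 8)) for _ in range(500)]
X = np.stack([one_hot(s) for s in seqs])
y = np.array([1.0 if s[0] == "G" else 0.0 for s in seqs])

# Logistic regression trained by plain gradient descent.
w = np.zeros(X.shape[1])
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))   # predicted probability
    w -= 0.1 * (X.T @ (p - y)) / len(y)  # mean gradient step

acc = float(np.mean(((X @ w) > 0) == (y == 1)))
print(f"training accuracy: {acc:.2f}")
```

The real problem is much harder because effectiveness depends on long-range folding interactions, which is what motivates deeper architectures than this linear model.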


Deep learning takes on tumours

#artificialintelligence

As cancer cells spread in a culture dish, Guillaume Jacquemet is watching. The cell movements hold clues to how drugs or gene variants might affect the spread of tumours in the body, and he is tracking the nucleus of each cell in frame after frame of time-lapse microscopy films. But because he has generated about 500 films, each with 120 frames and 200–300 cells per frame, that analysis is challenging to say the least. "If I had to do the tracking manually, it would be impossible," says Jacquemet, a cell biologist at Åbo Akademi University in Turku, Finland. So he has trained a machine to spot the nuclei instead.
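The automated nucleus-spotting Jacquemet relies on can be approximated crudely without a neural network: threshold a frame and count connected components. The synthetic frame and threshold below are illustrative assumptions; his actual pipeline uses a trained deep model.

```python
# Crude nucleus detection on a synthetic frame: thresholding plus
# connected-component labeling. Illustrative only; real microscopy
# data is far noisier, which is why a trained model is needed.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)

# Synthetic 64x64 "microscopy frame": dim background noise plus three
# bright Gaussian blobs standing in for nuclei.
frame = rng.normal(0.1, 0.02, size=(64, 64))
yy, xx = np.ogrid[:64, :64]
for cy, cx in [(15, 15), (30, 45), (50, 20)]:
    frame += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 8.0)

mask = frame > 0.5                      # threshold away the background
labels, n_nuclei = ndimage.label(mask)  # connected components = nuclei
centroids = ndimage.center_of_mass(mask, labels, range(1, n_nuclei + 1))
print(f"found {n_nuclei} nuclei near {centroids}")
```

Scaled to his dataset (about 500 films, 120 frames each, 200–300 cells per frame), even this counting step runs into tens of millions of detections, which makes the case for automation clear.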


Demystifying AI – The AI explosion

#artificialintelligence

This is an article I had originally written as part of a stream of work that has now been put on hold indefinitely, and I thought it a shame for it to languish in OneNote. So, is the current excitement justified? Well, that is a very good question. To be perfectly frank, not that much has changed of late in the world of Artificial Intelligence (AI) as a whole that would justify all the current excitement. That's not to say there isn't cool stuff going on; there really is great progress being made… in the world of Machine Learning.


How long will patient live? Deep Learning takes on predictions

#artificialintelligence

End-of-life care might be improved with deep learning. In a successful pilot study, an AI program predicted how long people will live. George Dvorsky in Gizmodo and others reported on the work. The Stanford University team is using an algorithm to predict mortality, with the goal of improving the timing of end-of-life care for critically ill patients. While 80 percent of Americans prefer to spend their final days at home, only 20 percent actually do.


Google Brain chief: Deep learning takes at least 100,000 examples

@machinelearnbot

While the current class of deep learning techniques is helping fuel the AI wave, one of the frequently cited drawbacks is that they require a lot of data to work. But how much is enough data? "I would say pretty much any business that has tens or hundreds of thousands of customer interactions has enough scale to start thinking about using these sorts of things," Jeff Dean, a senior fellow at Google, said in an onstage interview at the VB Summit in Berkeley, California. "If you only have 10 examples of something, it's going to be hard to make deep learning work. If you have 100,000 things you care about, records or whatever, that's the kind of scale where you should really start thinking about these kinds of techniques."
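Dean's rule of thumb can be illustrated in a toy setting: fit the same linear model on 10 versus 100,000 examples and compare held-out error. All of the data, dimensions, and thresholds below are illustrative assumptions, not from the talk.

```python
# Toy illustration of data scale: fit the same least-squares model on
# 10 vs 100,000 synthetic examples and compare held-out error.
import numpy as np

rng = np.random.default_rng(42)
true_w = rng.normal(size=20)  # 20-feature linear ground truth

def make_data(n):
    X = rng.normal(size=(n, 20))
    y = X @ true_w + rng.normal(scale=1.0, size=n)  # noisy labels
    return X, y

X_test, y_test = make_data(5_000)

results = {}
for n in (10, 100_000):
    X, y = make_data(n)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)         # least-squares fit
    results[n] = np.mean((X_test @ w - y_test) ** 2)  # held-out MSE
    print(f"n={n:>6}: held-out MSE {results[n]:.2f}")
```

With 10 examples and 20 features the fit memorizes the training data and generalizes poorly; with 100,000 examples the held-out error approaches the noise floor. Deep networks have vastly more parameters than this linear model, which is why Dean's threshold sits in the tens-of-thousands range rather than the tens.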


Deep Learning Takes on Translation

Communications of the ACM

Over the last few years, data-intensive machine-learning techniques have made dramatic strides in speech recognition and image analysis. Now these methods are making significant advances on another long-standing challenge: translation of written text between languages. Until a couple of years ago, the steady progress in machine translation had always been dominated by Google, with its well-supported phrase-based statistical analysis, said Kyunghyun Cho, an assistant professor of computer science and data science at New York University (NYU). However, in 2015, Cho (then a post-doc in Yoshua Bengio's group at the University of Montreal) and others brought neural-network-based statistical approaches to the annual Workshop on Machine Translation (WMT 15), and for the first time, the "Google translation was not doing better than any of those academic systems." Since then, "Google has been really quick in adapting this (neural network) technology" for translation, Cho observed.


In Machine Learning, What is Better: More Data or better Algorithms

#artificialintelligence

"In machine learning, is more data always better than better algorithms?" No. There are times when more data helps, there are times when it doesn't. Probably one of the most famous quotes defending the power of data is that of Google's Research Director Peter Norvig claiming that "We don't have better algorithms. We just have more data.". This quote is usually linked to the article on "The Unreasonable Effectiveness of Data", co-authored by Norvig himself (you should probably be able to find the pdf on the web although the original is behind the IEEE paywall).